Optimizing Multi-Tier Supply Chain Ordering with LNN+XGBoost: Mitigating the Bullwhip Effect
Supply chain management faces significant challenges, including demand fluctuations, inventory imbalances, and amplified upstream order variability due to the bullwhip effect. Traditional methods, such as simple moving averages, struggle to address dynamic market conditions. Emerging machine learning techniques, including LSTM, reinforcement learning, and XGBoost, offer potential solutions but are limited by computational complexity, training inefficiencies, or constraints in time-series modeling. Liquid Neural Networks, inspired by dynamic biological systems, present a promising alternative due to their adaptability, low computational cost, and robustness to noise, making them suitable for real-time decision-making and edge computing. Despite their success in applications like autonomous vehicles and medical monitoring, their potential in supply chain optimization remains underexplored. This study introduces a hybrid LNN and XGBoost model to optimize ordering strategies in multi-tier supply chains. By leveraging LNN's dynamic feature extraction and XGBoost's global optimization capabilities, the model aims to mitigate the bullwhip effect and enhance cumulative profitability. The research investigates how local and global synergies within the hybrid framework address the dual demands of adaptability and efficiency in SCM. The proposed approach fills a critical gap in existing methodologies, offering an innovative solution for dynamic and efficient supply chain management.
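The bullwhip effect the abstract targets can be quantified as the variance of upstream orders relative to the variance of end-customer demand; a minimal sketch (the function name and data are illustrative, not taken from the study):

```python
import statistics

def bullwhip_ratio(demand, orders):
    """Variance amplification of upstream orders relative to customer
    demand; a ratio greater than 1 indicates the bullwhip effect."""
    return statistics.variance(orders) / statistics.variance(demand)

# Toy series: a retailer over-correcting its orders in response to
# small swings in demand (values are illustrative).
demand = [100, 102, 98, 101, 99, 103, 97, 100]
orders = [100, 110, 85, 108, 92, 115, 80, 105]

ratio = bullwhip_ratio(demand, orders)  # > 1: variability amplified upstream
```

An ordering policy such as the hybrid model described above succeeds to the extent that it drives this ratio toward 1 while preserving cumulative profit.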
Accuracy, Memory Efficiency and Generalization: A Comparative Study on Liquid Neural Networks and Recurrent Neural Networks
Zong, Shilong, Bierly, Alex, Boker, Almuatazbellah, Eldardiry, Hoda
This review conducts a comparative analysis of liquid neural networks (LNNs) and traditional recurrent neural networks (RNNs) and their variants, such as long short-term memory networks (LSTMs) and gated recurrent units (GRUs). The core dimensions of the analysis are model accuracy, memory efficiency, and generalization ability. By systematically reviewing existing research, this paper explores the basic principles, mathematical models, key characteristics, and inherent challenges of these neural network architectures in processing sequential data. The findings reveal that LNNs, as an emerging class of biologically inspired, continuous-time dynamic neural networks, demonstrate significant potential in handling noisy, non-stationary data and achieving out-of-distribution (OOD) generalization. Additionally, some LNN variants outperform traditional RNNs in parameter efficiency and computational speed. However, RNNs remain a cornerstone of sequence modeling due to their mature ecosystem and successful applications across a range of tasks. This review identifies the commonalities and differences between LNNs and RNNs, summarizes their respective shortcomings and challenges, and points out valuable directions for future research, particularly emphasizing the importance of improving the scalability of LNNs to promote their application in broader and more complex scenarios.
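The continuous-time dynamics that distinguish LNNs from discrete-step RNNs can be sketched with a single liquid time-constant (LTC) neuron; the gate parameters and Euler integration below are illustrative assumptions, not any paper's exact formulation:

```python
import math

def ltc_step(x, I, tau=1.0, A=1.0, w=1.0, b=0.0, dt=0.05):
    """One Euler step of a liquid time-constant neuron,
        dx/dt = -x/tau + f(I) * (A - x),
    where the sigmoid gate f(I) makes the effective time constant
    1 / (1/tau + f(I)) vary with the input -- the 'liquid' behavior."""
    f = 1.0 / (1.0 + math.exp(-(w * I + b)))  # input-dependent gate
    return x + dt * (-x / tau + f * (A - x))

# Under a constant input the state relaxes toward an input-dependent
# equilibrium f*A / (1/tau + f) rather than a fixed set point.
x = 0.0
for _ in range(200):
    x = ltc_step(x, I=2.0)
```

A standard RNN cell applies the same discrete update regardless of how inputs are spaced in time; here the dynamics themselves depend on the input, which underlies the robustness to noisy, non-stationary series discussed above.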
Liquid AI Is Redesigning the Neural Network
Artificial intelligence might now be solving advanced math, performing complex reasoning, and even using personal computers, but today's algorithms could still learn a thing or two from microscopic worms. Liquid AI, a startup spun out of MIT, will today reveal several new AI models based on a novel type of "liquid" neural network that has the potential to be more efficient, less power-hungry, and more transparent than the ones that underpin everything from chatbots to image generators to facial recognition systems. Liquid AI's new models include one for detecting fraud in financial transactions, another for controlling self-driving cars, and a third for analyzing genetic data. The company touted the new models, which it is licensing to outside companies, at an event held at MIT today. The company has received funding from investors that include Samsung and Shopify, both of which are also testing its technology.
GENIE-NF-AI: Identifying Neurofibromatosis Tumors using Liquid Neural Network (LTC) trained on AACR GENIE Datasets
Bidollahkhani, Michael, Atasoy, Ferhat, Abedini, Elnaz, Davar, Ali, Hamza, Omid, Sefaoğlu, Fırat, Jafari, Amin, Yalçın, Muhammed Nadir, Abdellatef, Hamdan
In recent years, the field of medicine has increasingly adopted artificial intelligence (AI) technologies to provide faster and more accurate disease detection, prediction, and assessment. In this study, we propose an interpretable AI approach to diagnose patients with neurofibromatosis using blood tests and pathogenic variables. We evaluated the proposed method on a dataset from the AACR GENIE project and compared its performance with modern approaches. Our proposed approach outperformed existing models with 99.86% accuracy. We also conducted NF1 and interpretable-AI tests to validate our approach. Our work pairs an explainable glass-box model based on logistic regression with a black-box model: the explainability techniques help interpret the black-box model's predictions, while the glass-box model reveals the best-fit features. Overall, our study presents an interpretable AI approach for diagnosing patients with neurofibromatosis and demonstrates the potential of AI in the medical field.
Drones navigate unseen environments with liquid neural networks
Makram Chahine, a PhD student in electrical engineering and computer science and an MIT CSAIL affiliate, leads a drone used to test liquid neural networks. In the vast, expansive skies where birds once ruled supreme, a new crop of aviators is taking flight. These pioneers of the air are not living creatures but products of deliberate innovation: avian-inspired drones that soar through the sky, guided by liquid neural networks to navigate ever-changing and unseen environments with precision and ease. Inspired by the adaptable nature of organic brains, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced a method for robust flight navigation agents to master vision-based fly-to-target tasks in intricate, unfamiliar environments.
Drones may better navigate unfamiliar surroundings with the help of liquid neural networks
Drones have a wide range of applications, but sending them into unfamiliar environments can be a challenge. Whether delivering a package, monitoring wildlife or conducting search and rescue missions, knowing how to navigate previously unseen surroundings (or ones that have changed significantly) is critical for a drone to effectively complete tasks. Researchers at the Massachusetts Institute of Technology (MIT) believe they've found a more effective way of helping drones fly through unknown spaces, thanks to liquid neural networks. MIT created its liquid neural networks -- which are inspired by the adaptability of organic brains -- in 2021. The artificial intelligence and machine learning algorithms are able to learn and adapt to new data in the real world, not only while they're being trained.
Solving brain dynamics gives rise to flexible machine-learning models
Last year, MIT researchers announced that they had built "liquid" neural networks, inspired by the brains of small species: a class of flexible, robust machine learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks like driving and flying. The flexibility of these "liquid" neural nets meant better decision-making for many tasks involving time-series data, such as brain and heart monitoring, weather forecasting, and stock pricing. But these models become computationally expensive as their numbers of neurons and synapses increase, requiring clunky computer programs to solve the complicated math underlying them. And all of this math, as with many physical phenomena, becomes harder to solve with size, meaning computing many small steps to arrive at a solution. Now, the same team of scientists has discovered a way to alleviate this bottleneck by solving the differential equation behind the interaction of two neurons through synapses, unlocking a new type of fast and efficient artificial intelligence algorithm.
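For a constant gate value, the single-neuron ODE behind these models admits an exact solution, which illustrates, in a deliberately simplified hypothetical setting, why a closed-form expression removes the need to step an ODE solver; the published closed-form continuous-time (CfC) networks learn an approximation of this idea for time-varying inputs:

```python
import math

def ltc_closed_form(x0, f, t, tau=1.0, A=1.0):
    """Exact solution of dx/dt = -x/tau + f * (A - x) for a constant
    gate value f: exponential relaxation toward x_inf = f*A / (1/tau + f).
    One evaluation replaces the many small solver steps mentioned above."""
    rate = 1.0 / tau + f
    x_inf = f * A / rate
    return x_inf + (x0 - x_inf) * math.exp(-rate * t)
```

Because the trajectory is evaluated in one shot at any time t, the cost no longer grows with the number of integration steps, which is the bottleneck the researchers set out to remove.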
New 'Liquid' AI Learns Continuously From Its Experience of the World
In the animal kingdom, brains come in all shapes and sizes. So, in a new machine learning approach, engineers did away with the human brain and all its beautiful complexity--turning instead to the brain of a lowly worm for inspiration. Turns out, simplicity has its benefits. The resulting neural network is efficient, transparent, and here's the kicker: It's a lifelong learner. Whereas most machine learning algorithms can't hone their skills beyond an initial training period, the researchers say the new approach, called a liquid neural network, has a kind of built-in "neuroplasticity." That is, as it goes about its work--say, in the future, maybe driving a car or directing a robot--it can learn from experience and adjust its connections on the fly.
Why is 'Liquid' Neural Network from MIT a Revolutionary Innovation?
The tech world is brimming with updates about the latest innovations on the artificial intelligence front. Applications like machine learning, computer vision, deep learning, natural language processing and neural networks are being deployed and are influencing the bottom line of several industry verticals. Among these, neural networks have attracted great interest in the scientific community because they are inspired by the way biological nervous systems process information. By emulating functions of the human brain, they help in developing computational models capable of pattern recognition and, ultimately, the assimilation of new information. The term neural network derives from the work of the neuroscientist Warren S. McCulloch and the logician Walter Pitts.